
    The feasibility of genome-scale biological network inference using Graphics Processing Units

    Systems research spanning fields from biology to finance involves the identification of models to represent the underpinnings of complex systems. Formal approaches for data-driven identification of network interactions include statistical inference-based approaches and methods to identify dynamical systems models capable of fitting multivariate data. The availability of large data sets and so-called ‘big data’ applications in biology presents great opportunities as well as major challenges for systems identification/reverse engineering. For example, both inverse identification and forward simulation of genome-scale gene regulatory network models pose compute-intensive problems. This issue is addressed here by combining the processing power of Graphics Processing Units (GPUs) with a parallel reverse-engineering algorithm for inference of regulatory networks. It is shown that, given an appropriate data set, information on genome-scale networks (systems of 1000 or more state variables) can be inferred with a reverse-engineering algorithm in a matter of days on a small-scale modern GPU cluster.
    https://deepblue.lib.umich.edu/bitstream/2027.42/136186/1/13015_2017_Article_100.pd
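    The compute-intensive step maps well to GPUs because the inference problem decomposes per gene. The sketch below is not the paper's algorithm; it is a minimal illustration, assuming a linear ODE network model and synthetic data, of how genome-scale per-gene regressions reduce to a single batched least-squares call that a GPU array library (e.g. CuPy, which mirrors the NumPy interface used here) could execute in place of NumPy.

```python
# Minimal sketch (not the published algorithm): infer a candidate regulatory
# interaction matrix A from expression time-series by solving dX/dt ~= X @ A.
# Each gene's column of A is an independent regression, which is why the
# problem parallelizes naturally on a GPU; NumPy stands in for a GPU array
# library here.
import numpy as np

rng = np.random.default_rng(0)

n_genes, n_samples = 1000, 200                      # "genome-scale": ~1000 state variables
X = rng.standard_normal((n_samples, n_genes))       # expression levels (synthetic)
dXdt = rng.standard_normal((n_samples, n_genes))    # estimated time derivatives (synthetic)

# One batched least-squares call solves all per-gene regressions at once.
A, *_ = np.linalg.lstsq(X, dXdt, rcond=None)

print(A.shape)  # (1000, 1000) candidate interaction matrix
```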

    Application of machine learning to predict reduction in total PANSS score and enrich enrollment in schizophrenia clinical trials

    Clinical trial efficiency, defined as facilitating patient enrollment and reducing the time to reach safety and efficacy decision points, is a critical driver of improvement in therapeutic development. The present work evaluated a machine learning (ML) approach to improve phase II or proof-of-concept trials designed to address unmet medical needs in treating schizophrenia. Diagnostic data from the Clinical Antipsychotic Trials of Intervention Effectiveness (CATIE) trial were used to develop a binary classification ML model predicting individual patient response as either "improvement," defined as a greater than 20% reduction in total Positive and Negative Syndrome Scale (PANSS) score, or "no improvement," defined as an inadequate treatment response (<20% reduction in total PANSS). A random forest algorithm performed best among the tree-based approaches evaluated in its ability to classify patients after 6 months of treatment. Although the model's sensitivity (its ability to identify true positives) was poor (<0.2), its specificity (true negative rate) was high (0.948). A second model, adapted from the first, was subsequently applied as a proof of concept for using ML to supplement trial enrollment by identifying patients not expected to improve based on their baseline diagnostic scores. In three virtual trials applying this screening approach, the percentage of patients predicted to improve ranged from 46% to 48%, roughly double the CATIE response rate of 22%. These results demonstrate a promising application of ML for improving clinical trial efficiency, and such models merit further consideration and development.
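    As a hedged illustration only (synthetic data and placeholder features, not the CATIE variables or the published model), the sketch below shows the general shape of such a workflow: fit a random forest binary classifier and report the two metrics quoted in the abstract, sensitivity and specificity.

```python
# Minimal sketch: random forest binary classification with sensitivity and
# specificity computed from the confusion matrix. Data are synthetic stand-ins.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, weights=[0.7, 0.3],
                           random_state=0)          # y = 1: ">20% PANSS reduction" (placeholder)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()

print("sensitivity:", tp / (tp + fn))   # ability to identify true improvers
print("specificity:", tn / (tn + fp))   # ability to rule out non-improvers
```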

    Machine Learning in Drug Discovery and Development Part 1: A Primer

    Artificial intelligence, and in particular machine learning (ML), has emerged as a promising pillar for overcoming the high failure rate in drug development. Here, we present a primer on the ML algorithms most commonly used in drug discovery and development. We also list possible data sources, describe good practices for ML model development and validation, and share a reproducible example. A companion article will summarize applications of ML in drug discovery, drug development, and the postapproval phase.
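    The primer's own reproducible example is not shown here; as a stand-in, the sketch below illustrates the kind of good practice the abstract refers to, assuming synthetic data: a fixed random seed, a held-out test set, and cross-validated hyperparameter selection.

```python
# Minimal sketch of basic ML good practice: seeded data split, cross-validated
# model selection, and evaluation on a held-out test set. Synthetic data only.
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = make_regression(n_samples=500, n_features=30, noise=10.0, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

# 5-fold cross-validation over a small hyperparameter grid.
search = GridSearchCV(Ridge(), {"alpha": [0.1, 1.0, 10.0]}, cv=5)
search.fit(X_tr, y_tr)

print("best alpha:", search.best_params_["alpha"])
print("held-out R^2:", search.best_estimator_.score(X_te, y_te))
```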

    Modeling Predicts Kidney Function for Patients after a Kidney Transplant

    Nonlinear mixed effects (NLME) models based on stochastic differential equations (SDEs) have evolved into a mature approach for the analysis of PKPD data [1-3], but parameter estimation remains challenging. We present an exact-gradient version of the first-order conditional estimation (FOCE) method for SDE-NLME models and investigate whether it enables faster estimation and better gradient precision and accuracy than finite-difference gradients.
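    The FOCE method itself is not reproduced here; the sketch below only illustrates the underlying comparison, assuming a simple Gaussian likelihood: an exact, analytically derived gradient versus a central finite-difference approximation of the same gradient.

```python
# Minimal sketch (not the FOCE method): exact analytic gradient of a Gaussian
# negative log-likelihood versus a central finite-difference approximation.
import numpy as np

rng = np.random.default_rng(1)
y = rng.normal(loc=2.0, scale=1.5, size=50)

def nll(theta):
    # Negative log-likelihood (constants dropped); theta = (mu, log_sigma).
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)
    return 0.5 * np.sum(((y - mu) / sigma) ** 2) + y.size * log_sigma

def exact_grad(theta):
    # Analytic gradient of nll with respect to (mu, log_sigma).
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)
    r = y - mu
    return np.array([-np.sum(r) / sigma**2,
                     -np.sum(r**2) / sigma**2 + y.size])

def fd_grad(theta, h=1e-5):
    # Central finite-difference gradient of nll.
    g = np.zeros_like(theta)
    for i in range(theta.size):
        e = np.zeros_like(theta)
        e[i] = h
        g[i] = (nll(theta + e) - nll(theta - e)) / (2 * h)
    return g

theta = np.array([1.0, 0.0])
print("exact:       ", exact_grad(theta))
print("finite diff: ", fd_grad(theta))
print("max abs diff:", np.max(np.abs(exact_grad(theta) - fd_grad(theta))))
```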